We summarize our TRECVID 2022 Ad-hoc Video Search (AVS) experiments. Our solution builds on two new techniques: Lightweight Attentional Feature Fusion (LAFF) for combining diverse visual/textual features, and Bidirectional Negation Learning (BNL) for handling queries that contain negation cues. In particular, LAFF performs feature fusion at both early and late stages and at both the text and video ends to exploit diverse (off-the-shelf) features. Compared to multi-head self-attention, LAFF is much more compact yet more effective. Its attentional weights can also be used to select fewer features, with the retrieval performance largely preserved. BNL trains a negation-aware video retrieval model by minimizing a bidirectionally constrained loss per triplet, where a triplet consists of a training video, its original description, and a partially negated description. For video feature extraction, we use pre-trained CLIP, BLIP, BEiT, ResNeXt-101, and irCSN. As for text features, we adopt bag-of-words, word2vec, CLIP, and BLIP. Our training data consists of MSR-VTT, TGIF, and VATEX, as used in our previous participation. In addition, we automatically caption the V3C1 collection for pre-training. The 2022 edition of the TRECVID benchmark has again been fruitful for the RUCMM team: our best run, with an infAP of 0.262, ranked second among all teams.
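To make the fusion idea concrete, here is a minimal PyTorch sketch of LAFF-style attentional fusion. The shared dimension, the tanh activation, and the single linear scorer are illustrative assumptions, not the exact configuration used in the runs described above.

```python
import torch
import torch.nn as nn

class LAFF(nn.Module):
    """Minimal sketch of Lightweight Attentional Feature Fusion.

    Each heterogeneous feature is projected into a shared space; a single
    shared scorer produces one attention weight per feature, and the fused
    output is the weighted sum. Sizes/activations are assumptions.
    """
    def __init__(self, feat_dims, common_dim=512):
        super().__init__()
        self.projections = nn.ModuleList(
            [nn.Linear(d, common_dim) for d in feat_dims])
        self.scorer = nn.Linear(common_dim, 1, bias=False)

    def forward(self, feats):
        # feats: list of tensors, feats[i] has shape (batch, feat_dims[i])
        projected = torch.stack(
            [torch.tanh(p(f)) for p, f in zip(self.projections, feats)],
            dim=1)                                   # (batch, k, common_dim)
        weights = torch.softmax(self.scorer(projected), dim=1)  # (batch, k, 1)
        return (weights * projected).sum(dim=1)      # (batch, common_dim)
```

Averaging the learned weights over a validation set gives a simple criterion for dropping the least useful features, matching the feature-selection use of the attentional weights mentioned in the abstract.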
The key research question in image manipulation detection is how to learn generalizable features that are sensitive to manipulations in novel data, yet specific enough to prevent false alarms on authentic images. Current research emphasizes sensitivity, while specificity is largely ignored. In this paper, we address both aspects via multi-view feature learning and multi-scale supervision. By exploiting the noise distribution and boundary artifacts surrounding tampered regions, the former aims to learn semantic-agnostic and thus more generalizable features. The latter allows us to learn from authentic images, which are nontrivial to exploit for prior art that relies on a semantic segmentation loss. Our ideas are realized in a new network, which we term MVSS-Net, and its enhanced version MVSS-Net++. Comprehensive experiments on six public benchmark datasets demonstrate the viability of the MVSS-Net series for both pixel-level and image-level manipulation detection.
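As a rough illustration of the multi-scale supervision idea, the sketch below combines a pixel-level segmentation loss with an image-level loss obtained by global max pooling, so that fully authentic images (all-zero masks) still penalize false alarms. The pooling choice and the loss weighting are assumptions, not MVSS-Net's exact design.

```python
import torch.nn.functional as F

def multi_scale_loss(pixel_pred, pixel_gt, alpha=0.5):
    """Illustrative multi-scale supervision (alpha is an assumed weight).

    pixel_pred: (batch, H, W) manipulation probabilities in [0, 1]
    pixel_gt:   (batch, H, W) binary masks; all zeros for authentic images
    """
    # pixel-level supervision on the predicted manipulation mask
    seg_loss = F.binary_cross_entropy(pixel_pred, pixel_gt)
    # image-level supervision via global max pooling: an image is
    # "manipulated" iff at least one pixel is, so authentic images
    # still contribute a specificity-oriented signal
    img_pred = pixel_pred.flatten(1).max(dim=1).values
    img_gt = pixel_gt.flatten(1).max(dim=1).values
    clf_loss = F.binary_cross_entropy(img_pred, img_gt)
    return alpha * seg_loss + (1 - alpha) * clf_loss
```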
Visual place recognition (VPR) is critical not only for localization and mapping in autonomous vehicles, but also for assistive navigation for the visually impaired. To enable a long-term, large-scale VPR system, several challenges must be addressed. First, different applications may require different image view directions, such as front views for self-driving cars and side views for people with low vision. Second, VPR in metropolitan scenes often raises privacy concerns because pedestrian and vehicle identities are imaged, calling for data anonymization before VPR queries and database construction. Both factors can lead to VPR performance variations that are not yet well understood. To study their influence, we present the NYU-VPR dataset, which contains more than 200,000 images over a 2 km by 2 km area near the New York University campus, taken throughout 2016. We present benchmark results for several popular VPR algorithms, showing that side views are significantly more challenging for current VPR methods while the influence of data anonymization is almost negligible, together with our hypothesized explanations and in-depth analysis.
Humans constantly interact with objects in daily life tasks. Capturing such processes and subsequently conducting visual inferences from a fixed viewpoint suffers from occlusions, shape and texture ambiguities, motions, etc. To mitigate the problem, it is essential to build a training dataset that captures free-viewpoint interactions. We construct a dense multi-view dome to acquire a complex human-object interaction dataset, named HODome, which consists of ~75M frames of 10 subjects interacting with 23 objects. To process the HODome dataset, we develop NeuralDome, a layer-wise neural processing pipeline tailored for multi-view video inputs to conduct accurate tracking, geometry reconstruction, and free-view rendering for both human subjects and objects. Extensive experiments on the HODome dataset demonstrate the effectiveness of NeuralDome on a variety of inference, modeling, and rendering tasks. Both the dataset and the NeuralDome tools will be disseminated to the community for further development.
Reinforcement learning has emerged as a promising technique and achieved remarkable results in motion planning for free-floating space robots. However, due to the increased planning dimensionality and the intensified coupling of system dynamics, motion planning for dual-arm free-floating space robots remains an open challenge. In particular, because pose constraints on the end-effector are missing, current research cannot handle the task of capturing non-cooperative objects. To address this problem, we propose a novel, efficient algorithm that helps RL-based methods improve planning accuracy effectively. Our core contributions are constructing a mixed policy guided by prior knowledge and introducing the infinity norm to build a more reasonable reward function. Moreover, our method successfully captures spinning objects with different rotational velocities.
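A hypothetical sketch of the infinity-norm reward shaping mentioned above: penalizing only the largest pose-error component keeps the policy from trading accuracy in one dimension for progress in another. The error definition and scaling factor are assumptions for illustration.

```python
import numpy as np

def pose_reward(ee_pose, target_pose, alpha=1.0):
    """Hypothetical infinity-norm reward for end-effector pose tracking.

    ee_pose, target_pose: arrays of pose components (e.g., position +
    orientation parameters). The reward is the negated worst-case error,
    i.e., -alpha * ||ee_pose - target_pose||_inf.
    """
    error = np.abs(np.asarray(ee_pose) - np.asarray(target_pose))
    return -alpha * np.max(error)

# Example: the reward is driven by the single worst component (0.3)
print(pose_reward([0.1, 0.3, 0.05], [0.0, 0.0, 0.0]))  # -> -0.3
```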
Modern neural networks can perform at least as well as humans on many tasks involving object classification and image generation. However, small perturbations imperceptible to humans can significantly degrade the performance of well-trained deep neural networks. We provide a distributionally robust optimization (DRO) framework that integrates human-based image quality assessment methods to design optimal attacks that are imperceptible to humans yet inflict significant damage on deep neural networks. Through extensive experiments, we show that our attack algorithm produces attacks of better (human-perceived) quality than other state-of-the-art human-imperceptible attack methods. Furthermore, we demonstrate that DRO training with our optimally designed human-imperceptible attacks can improve group fairness in image classification. Finally, we provide an algorithmic implementation that significantly accelerates DRO training, which may be of independent interest.
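The following is a heavily simplified, single-sample sketch of optimizing an attack under a human-perception budget; the actual framework solves a distributionally robust problem. `perceptual_dist`, the penalty weight, and the choice of optimizer are placeholders and assumptions, not the paper's method.

```python
import torch

def perceptual_attack(model, loss_fn, perceptual_dist, x, y,
                      budget=0.1, steps=10, lr=0.01, penalty=10.0):
    """Sketch: maximize task loss while softly enforcing a perceptual budget.

    perceptual_dist(x, adv) is assumed to be a differentiable scalar
    image-quality distance; budget/penalty/lr are illustrative values.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (x + delta).clamp(0, 1)
        # minimize negative task loss (= maximize damage) plus a hinge
        # penalty whenever the perceptual distance exceeds the budget
        loss = (-loss_fn(model(adv), y)
                + penalty * torch.relu(perceptual_dist(x, adv) - budget))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta.detach()).clamp(0, 1)
```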
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
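A minimal sketch of the implicit-alignment idea: tokens from both modalities receive positional embeddings derived from associated 3D coordinates, so a transformer can align them without an explicit view transformation. The MLP shape is an assumption; the real CMT design is more involved.

```python
import torch
import torch.nn as nn

class CoordPositionEncoder(nn.Module):
    """Sketch: add 3D-coordinate-derived positional embeddings to tokens.

    Both image and LiDAR tokens can be tagged this way, placing them in a
    shared 3D-aware space. MLP width/depth are illustrative assumptions.
    """
    def __init__(self, embed_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, embed_dim))

    def forward(self, tokens, coords_3d):
        # tokens: (batch, n, embed_dim); coords_3d: (batch, n, 3)
        return tokens + self.mlp(coords_3d)
```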
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility; the security risks stemming from them have not been explored. This study performs the first backdoor attack against models trained on data distilled by dataset distillation in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
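A sketch of the NAIVEATTACK idea under simple assumptions (NHWC images, a fixed bottom-right corner patch, random poison selection); DOORPING would additionally re-optimize the trigger at every distillation step. Names and parameters here are illustrative, not the paper's implementation.

```python
import numpy as np

def naive_attack(images, labels, trigger, target_label, poison_rate=0.1):
    """Hypothetical NAIVEATTACK-style poisoning before distillation begins.

    images: (N, H, W, C) array; trigger: (h, w, C) patch stamped onto the
    bottom-right corner of a random fraction of images, whose labels are
    flipped to the attacker's target class.
    """
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = np.random.choice(len(images), n_poison, replace=False)
    h, w = trigger.shape[:2]
    images[idx, -h:, -w:, :] = trigger   # stamp the trigger patch
    labels[idx] = target_label           # relabel to the target class
    return images, labels
```

The distillation procedure then runs unchanged on the poisoned pool, baking the trigger-to-target association into the synthetic dataset.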
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility of deep models transferring knowledge from language to music, by finetuning large language models pre-trained on a massive text corpus with only hundreds of MIDI files of drum performances. We show that, by doing so, one of the largest state-of-the-art models (GPT3) is capable of generating reasonable drum grooves, while a model that is not pre-trained (Transformer) shows no such ability beyond naive repetition. Evaluating generated music is a challenging task, and evaluating drum grooves, for which there is little precedent in the literature, even more so. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT3 against those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.
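One plausible way to present a drum groove to a text-only language model is to serialize each note event as tokens; the sketch below assumes a "time pitch velocity" format on a fixed grid, which is a hypothetical encoding, not necessarily the one used with GPT3.

```python
def groove_to_text(notes, quantize=0.125):
    """Serialize a drum groove as text for language-model finetuning.

    notes: iterable of (onset_seconds, midi_pitch, velocity) tuples.
    Each event becomes 't<step> p<pitch> v<velocity>', with onsets
    quantized to the given grid. A hypothetical format for illustration.
    """
    events = []
    for onset, pitch, velocity in sorted(notes):
        step = round(onset / quantize)
        events.append(f"t{step} p{pitch} v{velocity}")
    return " ".join(events)

# Example: kick (36) and hi-hat (42) events from a simple groove
print(groove_to_text([(0.0, 36, 100), (0.25, 42, 80), (0.5, 36, 100)]))
# -> "t0 p36 v100 t2 p42 v80 t4 p36 v100"
```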
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully exploit the relationship between support and query features within a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, at the feature level and the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After these steps, the novel classes can be improved significantly over our strong baseline. Additionally, our framework can be easily extended to incremental FSIS with minor modifications. Benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and models will be available.
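To illustrate the first insight, here is a sketch of deriving a dynamic class center by masked average pooling over support features and using it to re-weight query features. The cosine-similarity re-weighting is an assumption for illustration; the actual RefT module is richer.

```python
import torch

def dynamic_class_center(support_feats, support_masks):
    """Masked average pooling over support features (a common FSIS recipe).

    support_feats: (n_shot, C, H, W); support_masks: (n_shot, 1, H, W)
    Returns a (C,) class center.
    """
    masked = support_feats * support_masks
    return masked.sum(dim=(0, 2, 3)) / support_masks.sum().clamp(min=1e-6)

def reweight_query(query_feats, center):
    """Assumed re-weighting: scale each query location by its cosine
    similarity to the class center. query_feats: (batch, C, H, W)."""
    sim = torch.cosine_similarity(
        query_feats, center[None, :, None, None], dim=1)  # (batch, H, W)
    return query_feats * sim.unsqueeze(1)
```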